In statistics, '''additive smoothing''', also called '''Laplace smoothing'''〔C.D. Manning, P. Raghavan and M. Schütze (2008). ''Introduction to Information Retrieval''. Cambridge University Press, p. 260.〕 (not to be confused with Laplacian smoothing) or '''Lidstone smoothing''', is a technique used to smooth categorical data. Given an observation '''x''' = (''x''<sub>1</sub>, …, ''x''<sub>''d''</sub>) from a multinomial distribution with ''N'' trials and parameter vector ''θ'' = (''θ''<sub>1</sub>, …, ''θ''<sub>''d''</sub>), a "smoothed" version of the data gives the estimator

:<math>\hat\theta_i = \frac{x_i + \alpha}{N + \alpha d} \qquad (i = 1, \ldots, d),</math>

where ''α'' > 0 is the smoothing parameter (''α'' = 0 corresponds to no smoothing). Additive smoothing is a type of shrinkage estimator, as the resulting estimate lies between the empirical estimate ''x''<sub>''i''</sub>/''N'' and the uniform probability 1/''d''.

Invoking Laplace's rule of succession, some authors have argued that ''α'' should be 1 (in which case the term '''add-one smoothing''' is also used), though in practice a smaller value is typically chosen. From a Bayesian point of view, the smoothed estimate is the expected value of the posterior distribution obtained by using a symmetric Dirichlet distribution with parameter ''α'' as a prior: the posterior is then Dirichlet(''x''<sub>1</sub> + ''α'', …, ''x''<sub>''d''</sub> + ''α''), whose mean is exactly the estimator above. In the special case where the number of categories is 2, this is equivalent to using a Beta distribution as the conjugate prior for the parameter of a binomial distribution.

==History==
Laplace came up with this smoothing technique when he tried to estimate the chance that the sun will rise tomorrow. His rationale was that even given a large sample of days with the rising sun, we still cannot be completely sure that the sun will rise tomorrow (this is known as the sunrise problem).〔(Lecture 5 | Machine Learning (Stanford)) at 1h10m into the lecture〕
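
==Example==
The estimator above is straightforward to compute directly from the category counts. The following is a minimal sketch in Python; the helper name <code>additive_smoothing</code> is hypothetical and the use of NumPy is an assumption of this illustration, not part of any standard library.

<syntaxhighlight lang="python">
import numpy as np

def additive_smoothing(counts, alpha=1.0):
    """Additive (Laplace/Lidstone) smoothing of categorical counts.

    Illustrative helper, not a standard library function. Returns the
    smoothed estimate (x_i + alpha) / (N + alpha * d) for each category i.
    """
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()   # total number of trials
    d = counts.size    # number of categories
    return (counts + alpha) / (N + alpha * d)

# Example: three categories observed 0, 2 and 8 times (N = 10).
# With alpha = 1 (add-one smoothing) the zero-count category still
# receives nonzero probability mass: (0 + 1) / (10 + 3) ≈ 0.077.
print(additive_smoothing([0, 2, 8], alpha=1.0))
# [0.07692308 0.23076923 0.69230769]
</syntaxhighlight>

With ''α'' = 0 the function returns the raw empirical frequencies, and as ''α'' grows the estimate shrinks toward the uniform probability 1/''d'', matching the shrinkage behaviour described above.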